TagC: a Language with Computation Distribution Directives for Expressing Parallelism
Authors
Abstract
In this paper, we present TagC, a new language based on C for distributing parallel and/or pipelined computations. TagC simplifies the programming of distributed-memory machines by providing the programmer with a global name space and a mechanism for specifying computation distribution. The main advantage of our paradigm is that it provides a unique framework for expressing both functional parallelism and data parallelism.

1 Background

Parallel computers are becoming more and more accessible to scientists and programmers. However, they are not likely to be widely accepted until they are easy to program. To obtain the full performance of a distributed-memory parallel machine, the programmer is forced to manually distribute code and data, and to manage the resulting communications by explicitly adding send and receive messages. Recently, new high-level programming languages have been developed that use a global name space to facilitate the writing of programs. In this approach, the currently dominant programming model for distributed-memory machines is the data-parallel paradigm with user-directed data distribution. With the owner-computes rule, the compiler derives the corresponding computation distribution. For example, this approach has been adopted in Vienna Fortran, Fortran-D, and High Performance Fortran [6, 13, 12]. In our approach, the user specifies the computation distribution rather than the data distribution. The latter is derived from the former either at compile time or at run time. Hence, the user has more flexibility to express and organize parallelism; both functional and data parallelism can be naturally expressed. Computation distribution as a programming model has been utilized on shared-memory parallel systems; see for example [4]. Yet it is applicable to distributed-memory machines as well, as demonstrated here. Additional motivation for this paradigm can be found in [15].
In this paper, we present TagC, a new language based on C for distributing parallel and/or pipelined computations, and its implementation. It was developed for typical distributed-memory MIMD machines, the Alex Informatique AVX Series 1 and 2 machines [2]. With TagC and its compiler, the programmer can concentrate directly on high-level algorithm design and performance issues. The TagC language is an extension to C, but the concept is in fact directly applicable to other imperative languages such as Fortran. In this section, we briefly illustrate the features of TagC by comparing two versions of a code written in C and TagC respectively. The program, which generates a Mandelbrot image, consists of a main program and three functions. See Figure 1. …
Similar Resources
High Performance Fortran 2.0
High Performance Fortran (HPF) is an informal standard for extensions to Fortran to assist its implementation on parallel architectures, particularly for data-parallel computation. Among other things, it includes directives for expressing data distribution across multiple memories, extra facilities for expressing data parallel and concurrent execution, and a mechanism for interfacing HPF to oth...
Extending Synchronization Constructs in OpenMP to Exploit Pipeline Parallelism on Heterogeneous Multi-core
The ability of expressing multiple-levels of parallelism is one of the significant features in OpenMP parallel programming model. However, pipeline parallelism is not well supported in OpenMP. This paper proposes extensions to OpenMP directives, aiming at expressing pipeline parallelism effectively. The extended directives are divided into two groups. One can define the precedence at thread lev...
Exploiting Data Locality on Scalable
OpenMP offers a high-level interface for parallel programming on scalable shared memory (SMP) architectures, providing the user with simple work-sharing directives while relying on the compiler to generate parallel programs based on thread parallelism. However, the lack of language features for exploiting data locality often results in poor performance since the non-uniform memory access times on...
PLDS: Partitioning Linked Data Structures for Parallelism
Recently, parallelization of computations in the presence of dynamic data structures has shown promising potential. In this paper, we present PLDS, a system for easily expressing and efficiently exploiting parallelism in computations that are based upon dynamic linked data structures. PLDS improves the execution efficiency by providing support for data partitioning and then distributing computa...
Compiling Fortran 90D/HPF for Distributed Memory MIMD Computers
This paper describes the design of the Fortran90D/HPF compiler, a source-to-source parallel compiler for distributed memory systems being developed at Syracuse University. Fortran 90D/HPF is a data parallel language with special directives to specify data alignment and distributions. A systematic methodology to process distribution directives of Fortran 90D/HPF is presented. Furthermore, techni...
Journal:
Volume / Issue:
Pages: -
Publication date: 1994